
Out-of-control AI Race Could Threaten Humanity

2023.04.20

An open letter signed by some of the biggest names in the tech world is attracting global attention and heated discussion about AI ethics.

PHOTO: VCG

In recent months, AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control, the letter said.

In the letter, titled "Pause Giant AI Experiments: An Open Letter," the signatories warned that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" and called on "all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4."

Published by the Future of Life Institute, the letter has attracted more than 15,500 signatures, including SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, and prominent AI researcher Stuart Russell.

The letter sparked immediate controversy upon release. Some critics argued that it focuses too heavily on hypothetical, long-term risks; as The Verge noted, it is unlikely to have any effect on the current climate of AI research.

Others, however, insisted that AI tools have already raised social and ethical problems that cannot be ignored, such as giving biased responses, spreading misinformation, and compromising consumer privacy.

Concerns about AI technology's negative influence have intensified, particularly with the upgrades to ChatGPT. For example, ChatGPT enables dishonest students to cheat on academic work. According to Reuters, the EU police agency Europol has warned about the potential misuse of the system for phishing attempts, disinformation, and cybercrime.

Even OpenAI, the creator of ChatGPT, said in a statement that at some point, "It may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."

Simeon Campos, the CEO of AI safety startup SaferAI, told TIME magazine that he signed the letter because it is impossible to manage the risks of systems when even their inventors don't know exactly how they work, what they're capable of, or how to place limits on their behavior.

It is undeniable that humanity benefits greatly from advances in AI technology. But if cutting-edge technology poses unpredictable risks to human survival, who should bear responsibility for such devastating consequences?

The answer involves not only AI creators and developers, but everyone on the planet.

The open letter isn't perfect, but the spirit in which it was written is right.

"We need to slow down until we better understand the ramifications," Gary Marcus, a professor at New York University who also signed the letter, told Reuters.

We need to give more weight to regulations and laws on AI before it's too late. The future of humanity should be determined by humanity, not by AI systems.